
Risks when humanitarians use AI

Wednesday 15 – Friday 17 May 2024 | WP3368


Participants discussed how to understand and mitigate the risks and harms that may be exacerbated as humanitarian organisations increasingly rely on AI-powered tools.

In an environment where high-risk decisions need to be made on behalf of vulnerable people, AI has the potential to produce errors that are costly for people whose survival depends on meaningful, high-value information.

The sector needs a better understanding of harm in a digital world: how is it measured, how does it go beyond the equivalent of physical harm, and what are primary and secondary harms? How can the humanitarian sector quantify this and invest in managing the narratives it wants in relation to its organisations and the communities it aims to assist and protect?

Sociocultural risks are often overlooked in the AI agenda, where models are insufficiently adapted to local conditions and narrow knowledge is applied in testing. This can lead to misuse and negative impacts during crises, or to the creation of models that are not generalisable when adapted elsewhere.

One proposed solution was greater stakeholder engagement throughout the AI project lifecycle, which is complicated by its many moving parts, including project design and model and system development.

“It is important we are not embedding and reinforcing inequalities”

During project design, particularly where the project is oriented to a specific social setting, the focus should be on understanding goals and addressing problems. Model development must define the technical output and how it will be managed in the project setting. It is important to improve the quality of data and to select and evaluate the model through community participatory processes to ensure transparency. Operationalising the model means training users and seeking continuous feedback after implementation.

Seeking the ‘consent’ of vulnerable people to collect data does not always ensure fairness of access or ownership of data, because of the complexities around AI and the lack of AI literacy in communities. Data justice is a vital concept: its six pillars of power, access, participation, equity, identity, and knowledge define how data intersects with social justice. AI must be firmly contextualised within social justice, intersectionality, and global and intercultural considerations. It is about choices, and layers of governance.

“We should not be adopting tools and supply chains without understanding the level of ethics.”

Participants worked in small groups, examining case studies to further discuss the risks and harms that AI may exacerbate. This session was created and facilitated in collaboration with The Alan Turing Institute.

Key summary points from these discussions included:

  • Poor data quality and data governance are major sources of harm to individuals, in terms of poor outcomes and inequalities in access to services.
  • Poor data governance could expose certain groups to stigma, discrimination, or violence.
  • The failure of AI tools can lead to a lack of trust among people and communities, which has implications for wider humanitarian delivery.
  • The social cohesion that occurs naturally when people come together may be dissipated when there is too much emphasis on technology.
  • Overreliance on tools, especially on LLMs, may create greater risks than scaling back and ensuring the technology is fit for purpose.
  • Scaling up AI models can lead to a loss of nuance and specificity at the local level. AI models may not meet actual needs.
  • Relying on historical data does not always provide accurate predictions for the future and can lead to bias.
  • Ambitious technology is hard to get off the ground, and it is easy to get stuck discussing the issues associated with it without making progress.
  • Lack of participation by communities along the entire AI model lifecycle is a problem.
  • Technical challenges may arise when infrastructure goes down.
  • More pressure may be put on frontline workers who take decisions based on these systems, but what happens when a system is not perfect or does not work? There could also be fatigue with the AI model.
  • Reputational damage to organisations and a waste of resources are a risk if AI models are unsuccessful.

“Transparency in this space is the lowest possible bar and is critical for any subsequent action.”

Ways to mitigate the risks

  • Put in place a risk matrix and apply a decision framework to support and formalise decision making that draws on the experience of frontline workers (a minimal sketch follows this list).
  • Develop operational tools that help users to practically address risks, such as risk impact assessments.
  • Use risk assurance tools to ensure models work as intended.
  • Apply standards and ‘how-to’ guides with people who are engaged across the project cycle.
  • Set up a good M&E framework with feedback loops, including community, patient, and service user groups.
  • Build incentives for learning and adaptation into the governance framework.
  • Compare the proposal for AI with a non-tech solution.
  • Deploy the model in parallel with the existing system, which helps to test the counterfactual without penalising the people exposed to it.
  • Build expertise among communities to allow them to engage meaningfully and advise.
  • Aim to build concepts of agency, of who defines access, and of who facilitates participation into practical approaches.
  • Create a community of practice around AI in humanitarian contexts with a peer review system and community-based advisory committees that have AI literacy.
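To illustrate the first point above, a risk matrix can be paired with a simple decision framework that maps likelihood and impact scores onto formalised actions such as proceeding with monitoring, piloting with human oversight, or escalating for redesign. The sketch below is a minimal, hypothetical Python example: the score bands, example risks, and recommended actions are assumptions made for illustration, not a standard drawn from the discussions.

```python
# Minimal sketch of a risk matrix plus decision framework (illustrative only).
# The likelihood/impact bands and recommended actions are assumptions,
# not an agreed humanitarian-sector standard.

from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe harm to affected people)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact


def decide(risk: Risk) -> str:
    """Map a risk score onto a formalised action, leaving room for
    frontline workers' judgement at every level."""
    if risk.score >= 15:
        return "Do not deploy; escalate for redesign and senior review"
    if risk.score >= 8:
        return "Pilot only, with human oversight and a rollback plan"
    return "Proceed, monitor via the M&E feedback loop"


if __name__ == "__main__":
    risks = [
        Risk("Biased historical training data", likelihood=4, impact=4),
        Risk("Infrastructure outage in the field", likelihood=3, impact=2),
    ]
    for r in risks:
        print(f"{r.name}: score {r.score} -> {decide(r)}")
```

In practice, the thresholds and actions would be set with frontline workers and affected communities rather than fixed in code, and the matrix would feed into the M&E framework and feedback loops described above.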

